These models were run on mammals to evaluate the relationship between summed population change and body size, trophic level, and lifespan traits, while accounting for differences in System and Class, and including a random effect of Binomial because the trait data were collected at the species level. When modelling summed lambdas, each model includes time series length (tslength) to account for differences in summed lambdas between shorter and longer time series.
The best models included the following variables:

- Summed lambdas: System and TrophicLevel, with their interaction (model m7).
- Average lambdas: none of the traits; the null model (m0) ranked best.
Summed lambdas represent the total change the population has undergone: negative values mean the population has had a net decline over the period considered, and positive values mean it has had a net increase over that period.
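As a minimal numeric sketch (the abundance values are made up, and the assumption that annual lambdas are log10 ratios of successive abundances is ours, not stated above):

```r
# Made-up abundance series; annual lambdas assumed to be log10 ratios
# of successive abundances (illustrative only)
N <- c(100, 80, 64, 70)
lambda <- log10(N[-1] / N[-length(N)])  # annual rates of change
sumlambda <- sum(lambda)                # net change over the whole period
sumlambda < 0                           # TRUE: a net decline (70 < 100)
10^sumlambda                            # back-transforms to N[4] / N[1]
```

Because the log ratios telescope, the summed lambda depends only on the first and last abundances, which is why it measures net change.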
The models fitted for summed lambdas are below; as noted above, each includes time series length (tslength).
# linear mixed-effects models fitted with lme4
library(lme4)

# null model
m0 = lmer(sumlambda ~ (1|Binomial) + tslength, data = df)
m1 = lmer(sumlambda ~ System + (1|Binomial) + tslength, data = df)
# adding traits
# body size only
m2 = lmer(sumlambda ~ log10(BodySize) + (1|Binomial) + tslength, data = df)
m3 = lmer(sumlambda ~ log10(BodySize) + System + (1|Binomial) + tslength, data = df)
m4 = lmer(sumlambda ~ System*log10(BodySize) + (1|Binomial) + tslength, data = df)
# trophic level only
m5 = lmer(sumlambda ~ TrophicLevel + (1|Binomial) + tslength, data = df)
m6 = lmer(sumlambda ~ TrophicLevel + System + (1|Binomial) + tslength, data = df)
m7 = lmer(sumlambda ~ System*TrophicLevel + (1|Binomial) + tslength, data = df)
# lifespan only
m8 = lmer(sumlambda ~ LifeSpan + (1|Binomial) + tslength, data = df)
m9 = lmer(sumlambda ~ LifeSpan + System + (1|Binomial) + tslength, data = df)
m10 = lmer(sumlambda ~ System*LifeSpan + (1|Binomial) + tslength, data = df)
# combinations of two and three traits
m11 = lmer(sumlambda ~ log10(BodySize) + TrophicLevel + (1|Binomial) + tslength, data = df)
m12 = lmer(sumlambda ~ log10(BodySize) + LifeSpan + (1|Binomial) + tslength, data = df)
m13 = lmer(sumlambda ~ TrophicLevel + LifeSpan + (1|Binomial) + tslength, data = df)
m14 = lmer(sumlambda ~ log10(BodySize) + TrophicLevel + LifeSpan + (1|Binomial) + tslength, data = df)
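A ranking like the comparison table below can be produced with the `performance` package; this is a hedged sketch, since the exact call is not shown in the source:

```r
# Hedged sketch: rank the fitted models on a combined performance score.
# compare_performance() with rank = TRUE returns R2s, ICC, RMSE, Sigma,
# AIC/BIC weights, and a Performance_Score column for each model.
library(performance)
comp <- compare_performance(m0, m1, m2, m3, m4, m5, m6, m7,
                            m8, m9, m10, m11, m12, m13, m14,
                            rank = TRUE)
comp
```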
We compared models according to a suite of performance metrics, including:

- R2_conditional and R2_marginal (variance explained by the full model and by the fixed effects alone)
- ICC (intraclass correlation coefficient)
- RMSE (root mean squared error)
- Sigma (residual standard deviation)
- AIC_wt and BIC_wt (Akaike and Bayesian information criterion weights)
These performance metrics are then combined into a performance score, which can be used to rank the models. The performance score is based on normalizing all indices (i.e. rescaling them to a range from 0 to 1), and taking the mean value of all indices for each model. This is mostly helpful as an exploratory metric, and is not necessarily enough to base interpretation on.
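The normalise-and-average step can be sketched in base R; the values here are illustrative, and inverting lower-is-better metrics before averaging is our assumption about the direction handling:

```r
# Min-max rescale a vector to the 0-1 range
normalise <- function(x) (x - min(x)) / (max(x) - min(x))

# Illustrative metric values for three candidate models
r2   <- c(0.218, 0.216, 0.214)   # higher is better
rmse <- c(0.468, 0.470, 0.469)   # lower is better, so invert after rescaling
score <- rowMeans(cbind(normalise(r2), 1 - normalise(rmse)))
score  # the model with the highest mean normalised score ranks first
```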
We can look at the metrics for the top 6 models below:
| Name | Model | R2_conditional | R2_marginal | ICC | RMSE | Sigma | AIC_wt | BIC_wt | Performance_Score |
|---|---|---|---|---|---|---|---|---|---|
| m7 | lmerMod | 0.2178949 | 0.0345982 | 0.1898657 | 0.4678565 | 0.4886512 | 0.0005961 | 0.0000000 | 0.5649782 |
| m6 | lmerMod | 0.2159069 | 0.0347315 | 0.1876943 | 0.4681072 | 0.4884065 | 0.0009987 | 0.0000002 | 0.5402196 |
| m13 | lmerMod | 0.2138352 | 0.0284946 | 0.1907767 | 0.4679915 | 0.4881880 | 0.0000279 | 0.0000001 | 0.5316822 |
| m5 | lmerMod | 0.2109302 | 0.0288631 | 0.1874783 | 0.4684271 | 0.4881286 | 0.0178571 | 0.0002888 | 0.4612871 |
| m14 | lmerMod | 0.2147028 | 0.0409487 | 0.1811729 | 0.4686776 | 0.4886182 | 0.0000026 | 0.0000000 | 0.4606463 |
| m11 | lmerMod | 0.2130763 | 0.0325606 | 0.1865912 | 0.4685419 | 0.4884660 | 0.0004391 | 0.0000009 | 0.4543722 |
Below are two plots, where the models on the x-axis are sorted from best to worst in terms of their performance score.
The first plot shows which variables are in each model. Time series length (tslength) and Binomial were included in all the models. From this plot, we can see that TrophicLevel is included in all 6 top models, suggesting it is an important variable for explaining total population change in mammals. BodySize and LifeSpan, on the other hand, are each in only 2 of the top 6 models, as is System.
The second plot shows the scores for each metric we can use to compare the models. The performance score, which summarises the scores of all metrics, was used to sort the models. RMSE and Sigma are broadly consistent with this performance score, but we can see that we would have different “top models” if we looked at AIC_wt or BIC_wt rather than performance. In the case of AIC_wt, the best model (m10) would also include LifeSpan (`System*LifeSpan`). In the case of BIC_wt, the best model (m13) would include TrophicLevel and LifeSpan.
The above figure only showed the scores ranked within each metric, but it is also helpful to look at how quantitatively different these metrics are.
Model 7 was the best model in terms of performance score and RMSE:
sumlambda ~ System*TrophicLevel + (1|Binomial) + tslength
## Error: Confidence intervals could not be computed.
## * Reason: "non-conformable arguments"
## * Source: mm %*% vcm
*(Effect plots for System, TrophicLevel, and tslength.)*
Model 6 was second best overall in terms of performance score, with RMSE (root mean squared error) and sigma (residual standard deviation) very close to model 7's:
sumlambda ~ TrophicLevel + System + (1|Binomial) + tslength
*(Effect plots for TrophicLevel, System, and tslength.)*
Average lambdas are the average of the annual rates of change in population abundance for the time period considered.
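The two summaries are directly related; for a single series (values illustrative), the average lambda is just the summed lambda divided by the number of annual intervals:

```r
# Illustrative annual lambdas for one population time series
lambda <- c(-0.02, 0.01, -0.03)
sumlambda <- sum(lambda)    # net change, as in the previous section
avlambda  <- mean(lambda)   # average annual rate of change
isTRUE(all.equal(avlambda, sumlambda / length(lambda)))  # TRUE
```

Because the average already divides by series length, a tslength covariate is less critical for these models than for the summed-lambda models above.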
These are models to evaluate the relationship between average population change and body size, trophic level, and lifespan traits, while accounting for differences in System and Class, and including a random effect of Binomial because the trait data were collected at the species level.
rm(m0, m1, m2, m3, m4, m5, m6, m7, m8, m9, m10, m11, m12, m13, m14)
# null model
m0 = lmer(avlambda ~ (1|Binomial), data = df)
m1 = lmer(avlambda ~ System + (1|Binomial), data = df)
# adding traits
# body size only
# body size is normalised within classes (only mammals here, so no Class term)
m2 = lmer(avlambda ~ log10(BodySize) + (1|Binomial), data = df)
m3 = lmer(avlambda ~ log10(BodySize) + System + (1|Binomial), data = df)
m4 = lmer(avlambda ~ System*log10(BodySize) + (1|Binomial), data = df)
# trophic level only
m5 = lmer(avlambda ~ TrophicLevel + (1|Binomial), data = df)
m6 = lmer(avlambda ~ TrophicLevel + System + (1|Binomial), data = df)
m7 = lmer(avlambda ~ System*TrophicLevel + (1|Binomial), data = df)
# lifespan only
m8 = lmer(avlambda ~ LifeSpan + (1|Binomial), data = df)
m9 = lmer(avlambda ~ LifeSpan + System + (1|Binomial), data = df)
m10 = lmer(avlambda ~ System*LifeSpan + (1|Binomial), data = df)
# combinations of two and three traits
m11 = lmer(avlambda ~ log10(BodySize) + TrophicLevel + (1|Binomial), data = df)
m12 = lmer(avlambda ~ log10(BodySize) + LifeSpan + (1|Binomial), data = df)
m13 = lmer(avlambda ~ TrophicLevel + LifeSpan + (1|Binomial), data = df)
m14 = lmer(avlambda ~ log10(BodySize) + TrophicLevel + LifeSpan + (1|Binomial), data = df)
We compared models according to the same suite of performance metrics as in the previous section (R2_conditional, R2_marginal, ICC, RMSE, Sigma, AIC_wt, and BIC_wt).
As before, these metrics are combined into a performance score by normalizing all indices (rescaling them to a range from 0 to 1) and taking the mean value for each model. This is mostly helpful as an exploratory metric, and is not necessarily enough to base interpretation on.
We can look at the metrics for the top 6 models below:
| Name | Model | R2_conditional | R2_marginal | ICC | RMSE | Sigma | AIC_wt | BIC_wt | Performance_Score |
|---|---|---|---|---|---|---|---|---|---|
| m0 | lmerMod | 0.1488905 | 0.0000000 | 0.1488905 | 0.1298620 | 0.1343833 | 0.9862347 | 0.9986208 | 0.6276346 |
| m10 | lmerMod | 0.1662056 | 0.0094111 | 0.1582841 | 0.1292813 | 0.1343960 | 0.0000000 | 0.0000000 | 0.5700837 |
| m13 | lmerMod | 0.1644064 | 0.0055093 | 0.1597774 | 0.1293237 | 0.1342535 | 0.0000003 | 0.0000000 | 0.5699181 |
| m5 | lmerMod | 0.1623360 | 0.0053920 | 0.1577948 | 0.1293784 | 0.1341837 | 0.0008716 | 0.0000131 | 0.5588605 |
| m7 | lmerMod | 0.1643309 | 0.0151153 | 0.1515056 | 0.1294144 | 0.1343950 | 0.0000011 | 0.0000000 | 0.5498786 |
| m6 | lmerMod | 0.1624907 | 0.0145367 | 0.1501365 | 0.1295234 | 0.1343726 | 0.0000056 | 0.0000000 | 0.5258056 |
Below are two plots as in the previous section, where the models on the x-axis are sorted from best to worst in terms of their performance score.
The first plot shows which variables are in each model. Binomial is included in all the models. From this plot, we can see that the null model is the best in terms of the performance score. However, LifeSpan and System are in the 2nd best model (LifeSpan is also in the 3rd best model, together with TrophicLevel). TrophicLevel is not in the top 2 models, but is in the 4 models ranked next. BodySize does not show up in any of the top models.
The second plot shows the scores for each metric we can use to compare the models. The performance score, which summarises the scores of all metrics, was used to sort the models. RMSE is broadly consistent with this performance score. AIC_wt and BIC_wt also strongly favour the null model (m0); among the trait-based models, m12 (BodySize and LifeSpan) ranks highest on these weights.
The above figure only showed the scores ranked within each metric, but it is also helpful to look at how quantitatively different these metrics are. For example, even in the top models, the explanatory power of the models is once again quite low (R2_conditional and R2_marginal). Generally, there are once again only very minor differences between the models (if we look at the y-axes).
Let’s look at the top models!
Model 0 was the best model in terms of performance score. However, as the null model it contains no trait predictors, so there is little to interpret:
avlambda ~ (1|Binomial)
Model 10 was the 2nd best model in terms of performance score. However, this model has low explanatory power (R2_conditional = 0.17, R2_marginal = 0.01) and a very low AIC_wt and BIC_wt (< 0.000001).
avlambda ~ System*LifeSpan + (1|Binomial)
*(Effect plots for System and LifeSpan.)*
Model 13 was the 3rd best model in terms of performance score.
avlambda ~ TrophicLevel + LifeSpan + (1|Binomial)
*(Effect plots for TrophicLevel and LifeSpan.)*